Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a038035949
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Force-pushed from a038035 to e6236ed.
Pull request overview
Adds the Zeta.Core.Veridicality foundation module (provenance + claims) and introduces antiConsensusGate to reject “agreement without independent roots”, with accompanying xUnit/FsUnit tests wired into the F# test project.
Changes:
- Add `src/Core/Veridicality.fs` defining `Provenance`, `Claim<'T>`, `validateProvenance`, `validateClaim`, and `antiConsensusGate`.
- Add `tests/Tests.FSharp/Algebra/Veridicality.Tests.fs` covering the above behaviors.
- Register the new source/test files in their respective `.fsproj` compile lists.
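To make the "agreement without independent roots" idea concrete, here is a hedged sketch of what the gate could look like. This is not the actual implementation in `src/Core/Veridicality.fs`: the record shapes are inferred from the `validateProvenance` snippet quoted later in this thread, and the `minRoots` parameter is an assumption.

```fsharp
// Hedged sketch, not the shipped code. Field names SourceId / RootAuthority /
// ArtifactHash come from the validateProvenance snippet quoted in the review;
// the minRoots threshold parameter is an assumption for illustration.
type Provenance =
    { SourceId: string
      RootAuthority: string
      ArtifactHash: string }

type Claim<'T> =
    { Payload: 'T
      Prov: Provenance }

/// Reject "agreement without independent roots": the agreeing claims must
/// trace back to at least minRoots distinct root authorities to pass.
let antiConsensusGate (minRoots: int) (claims: Claim<'T> list) : bool =
    let roots =
        claims
        |> List.map (fun c -> c.Prov.RootAuthority)
        |> List.distinct
    List.length roots >= minRoots
```

Under this sketch, ten claims that all cite the same root authority fail a `minRoots = 2` gate, while two claims with distinct roots pass.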
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| tests/Tests.FSharp/Tests.FSharp.fsproj | Adds the new Veridicality test file to the compile order. |
| tests/Tests.FSharp/Algebra/Veridicality.Tests.fs | Adds unit tests for provenance/claim validation and antiConsensusGate. |
| src/Core/Veridicality.fs | Introduces the Veridicality module and implements antiConsensusGate. |
| src/Core/Core.fsproj | Adds Veridicality.fs to the core library compile list. |
Comments suppressed due to low confidence (5)
src/Core/Veridicality.fs:15
- P2: This doc comment uses the term "bullshit detector" / "Bullshit" in new technical substrate. The repo’s pending-rename discipline asks to avoid baking this term into new code/docs; consider rephrasing to "veridicality scoring" (or similar) and treating the colloquial term as external/historical only.
/// This module hosts the primitives for what the bootstrap-era
/// conversation called the "bullshit detector" and Amara's
/// subsequent ferries (7th-10th) formalized as veridicality
/// scoring. The name `Veridicality` (from Latin *veridicus*,
/// "truth-telling") names the scorable quantity: how true-to-
/// reality a claim looks given its provenance, falsifiability,
/// coherence, drift, and compression-gap signals. "Bullshit" is
/// the informal inverse (`bullshit = 1 - veridicality`).
src/Core/Veridicality.fs:36
- P2: The graduation narrative appears internally inconsistent (this file says "First graduation" earlier, "Fifth graduation" here, and later calls the gate "Sixth graduation"). Please reconcile to a single ordinal to avoid docs drifting from the actual release sequence.
/// * **Implementation** — Otto, under the Otto-105 graduation
/// cadence. Fifth graduation.
src/Core/Veridicality.fs:101
- P2: The embedded snippet refers to `c.Prov.*`, but `validateProvenance` takes a `Provenance` value (`p`) in this file. Update the snippet (or the surrounding explanation) so it matches the actual function signature; otherwise it is misleading for readers.
/// Matches Amara's 10th-ferry snippet:
///
/// ```
/// let validateProvenance c =
///     c.Prov.SourceId <> ""
///     && c.Prov.RootAuthority <> ""
///     && c.Prov.ArtifactHash <> ""
/// ```
src/Core/Veridicality.fs:56
- P1: This doc comment references
docs/aurora/2026-04-23-amara-aurora-deep-research-report-10th-ferry.md, but that path does not exist in the repo (no matching file under docs/aurora). Please fix the path to the correct document or remove the broken reference.
/// Fields match Amara's 9th/10th ferry specification verbatim
/// (`docs/aurora/2026-04-23-amara-aurora-deep-research-report-10th-ferry.md`
/// §ADR-style spec for oracle rules and implementation).
src/Core/Veridicality.fs:31
- P1: This XML doc block introduces direct contributor name attribution ("Aaron") in a non-exempt code surface. Per docs/AGENT-BEST-PRACTICES.md, code/docs/skills should use role references (e.g., "human maintainer") instead of personal names.
/// **Attribution.**
/// * **Concept** — the bullshit-detector / provenance-aware-
/// scoring framing is Aaron's design, present in the bootstrap
/// conversation (`docs/amara-full-conversation/**`) before
/// Amara's ferries formalized it. Aaron 2026-04-24 Otto-112:
/// *"bullshit, it was in our conversation history too, not
/// just her ferry."*
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: e6236ed267
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Force-pushed from e6236ed to 602ff29.
…or / Integration Plan

Otto-117 dedicated absorb of the most comprehensive synthesis ferry yet (Aaron Otto-116 "next amara update"). Covers 9 sections:
1. Repo contents (LFG + AceHack)
2. Learnings (retraction-native, operator-algebra, Arrow/Spine, agent-CI)
3. KSK background — detailed government context (Feb 27 2026 DoD supply-chain-risk under 10 U.S.C. § 3252 against Anthropic; Judge Rita Lin Mar 26 preliminary injunction; OpenAI Feb 28 parallel DoW contract with Fourth-Amendment clause)
4. Network Integrity Detector (formalized "bullshit detector" — composite I(x) = σ(Σ w_i f_i) score)
5. Firefly + cartel detection (PLV, cross-correlation, spectral, graph-community)
6. Network differentiability (Shapley-ish counterfactual influence)
7. Oracle Rules enforcement mapping table
8. Integration plan (proposes 4-sub-repo split)
9. 9 prioritized next tasks

§33 archive-header compliance (Scope / Attribution / Operational status / Non-fusion disclaimer).

Otto's notes section provides an honest cross-reference to shipped work: ~40% of the ferry's operationalizable content is already shipped (PRs #295 RobustStats, #297 crossCorrelation, #298 PLV, #306 burstAlignment pending, #309 Veridicality.Provenance/Claim, #310 antiConsensusGate pending).

Genuinely novel in the 12th ferry (not in prior ferries):
1. Detailed government-context grounding for KSK (§3)
2. Composite integrity-score formulation I(x) = σ(Σ w_i f_i)
3. 4-sub-repo integration proposal (Conway's-Law-relevant per Otto-108 memory; Otto recommends staying single-repo)
4. Oracle-Rules enforcement decision table (§7)
5. Shapley-random-ordering counterfactual influence algorithm (§6)

Specific asks routed to Aaron:
1. §8 sub-repo split — Aaron decides per Otto-90 cross-repo
2. §9 task 1 KSK skeleton — Aaron + Max coordination
3. §3 citation verification — Aaron signals what matters

Next graduation queue (priority-ordered from Otto's notes):
1. SemanticCanonicalization (matches 8th ferry rainbow-table; smallest next item)
2. scoreVeridicality composite (needs ADR on formula)
3. Spectral-coherence FFT detector (§5)
4. ModularitySpike (needs graph substrate)
5. EigenvectorCentralityDrift (needs linear algebra)
6. EconomicCovariance / Gini-on-weights (§5)
7. OracleRules spec doc (§7)
8. InfluenceSurface (§6; larger effort)
9. KSK skeleton (Aaron + Max coord)

Sibling-ferry precedent: PRs #196/#211/#219/#221/#235/#245/#259/#274/#293/#294/#296.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
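The composite integrity score I(x) = σ(Σ w_i f_i) named in §4 can be sketched in a few lines. This is a hedged illustration only: the ferry's actual feature set (provenance, falsifiability, coherence, drift, compression-gap) and its weights are not fixed by this commit, and `scoreVeridicality` itself is still queued pending an ADR on the formula.

```fsharp
// Hedged sketch of the composite integrity score I(x) = σ(Σ w_i f_i).
// The (weight, feature) pairs are placeholders; the shipped scoreVeridicality
// formula is deliberately deferred to an ADR, per the graduation queue above.
let sigmoid (z: float) : float = 1.0 / (1.0 + exp (-z))

/// Combine weighted feature scores f_i with weights w_i into a single
/// integrity score in (0, 1) via the logistic sigmoid.
let integrityScore (weightedFeatures: (float * float) list) : float =
    weightedFeatures
    |> List.sumBy (fun (w, f) -> w * f)
    |> sigmoid
```

With no evidence (empty list) the score sits at the sigmoid's midpoint, 0.5; positively weighted features push it toward 1, negatively weighted ones toward 0.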
You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.
Force-pushed from 602ff29 to d67b0c5.
Force-pushed from d67b0c5 to 6455f01.
…l — 7th graduation

Ships the semantic-canonicalization primitive that turns "claims about the same proposition" into a first-class operation. Seventh graduation under the Otto-105 cadence; composes on PR #309's Veridicality.Claim<'T>. Aaron Otto-112 naming confirmed: Veridicality (not BullshitDetector); "bullshit" stays informal, the programmatic surface uses the formal term.

Attribution:
- Aaron = concept origin (bullshit-detector framing in bootstrap conversation; aboutness-vs-identity-of-source distinction)
- Amara = formalization (8th/10th ferries: K(c) = hash(subject, predicate, object, time-scope, modality, provenance-root, evidence-class))
- Otto = implementation with a deliberate design choice: EXCLUDE provenance-root from the key

Why exclude provenance-root: the key's purpose is to GROUP claims about the same proposition across sources, so antiConsensusGate (PR #310) can then check independent-root cardinality. Including provenance-root in the key would defeat that grouping. If a future use case needs dedupe-by-identical-source, a separate 7-field SourceScopedCanonicalClaimKey can be added.

Surface:
- Veridicality.CanonicalClaimKey record (5 fields: Subject, Predicate, Object, TimeScope, Modality)
- Veridicality.canonicalKey : ('T -> string*string*string*string*string) -> Claim<'T> -> CanonicalClaimKey
  User-supplied projector handles domain-specific normalization (lowercasing / trimming / unit-unification / alias-resolving). The module does not prescribe how to canonicalize natural language.
- Veridicality.groupByCanonical : projector -> Claim<'T> seq -> Map<CanonicalClaimKey, Claim<'T> list>
  Groups claims by proposition; preserves input order within each bucket.

Composition shape (once PR #310 lands): group first, then antiConsensusGate per bucket. Multi-root buckets pass; single-root buckets fail. The test "groupByCanonical produces distinct-root counts per bucket" verifies the half that can be tested on main right now.

Tests (7 new, 17 total in Veridicality module, all passing):
- canonicalKey projects payload fields
- canonicalKey EXCLUDES provenance-root (two same-prop-diff-root claims match)
- canonicalKey distinguishes different propositions
- groupByCanonical groups same-proposition under one key
- groupByCanonical preserves input order within bucket
- groupByCanonical on empty seq returns empty map
- groupByCanonical produces distinct-root counts per bucket

Build: 0 Warning / 0 Error.

SPOF (per Otto-106): pure function + record types; no external deps; no SPOF introduced. The caller's projector is the domain-specific SPOF — if it drifts, canonicalization drifts. Documented as caller responsibility in XML-doc.

Next graduation queue:
- Graph substrate (prerequisite for cartel-detection graduations)
- largestEigenvalue / modularityScore / covarianceAcceleration (13th + 14th ferries)
- NetworkIntegrity / composite I(x) score (12th ferry §4)
- scoreVeridicality (composite; needs ADR on formula)

Composes with:
- src/Core/Veridicality.fs Provenance + Claim<'T> (PR #309 merged)
- src/Core/Veridicality.fs antiConsensusGate (PR #310 pending; downstream composition enabled by this ship)
- RobustStats.robustAggregate (PR #295) for weight-combining

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
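The surface described above can be sketched as follows. This is a hedged reconstruction from the signatures in the commit message, not the shipped `src/Core/Veridicality.fs`: the `Provenance`/`Claim<'T>` shapes are assumed from the review snippets, and the module wrapper is omitted.

```fsharp
// Hedged sketch of the 7th-graduation surface. Record shapes for Provenance
// and Claim<'T> are assumptions inferred from this thread, not PR #309's code.
type Provenance = { SourceId: string; RootAuthority: string; ArtifactHash: string }
type Claim<'T> = { Payload: 'T; Prov: Provenance }

/// 5-field proposition key. Provenance-root is deliberately excluded so that
/// claims about the same proposition group together across sources.
type CanonicalClaimKey =
    { Subject: string; Predicate: string; Object: string
      TimeScope: string; Modality: string }

/// The caller-supplied projector performs all domain-specific normalization
/// (lowercasing, trimming, unit unification, alias resolution).
let canonicalKey (project: 'T -> string * string * string * string * string)
                 (claim: Claim<'T>) : CanonicalClaimKey =
    let s, p, o, t, m = project claim.Payload
    { Subject = s; Predicate = p; Object = o; TimeScope = t; Modality = m }

/// Group claims by proposition. List.groupBy preserves the input order of
/// elements within each bucket.
let groupByCanonical project (claims: Claim<'T> seq) : Map<CanonicalClaimKey, Claim<'T> list> =
    claims
    |> List.ofSeq
    |> List.groupBy (canonicalKey project)
    |> Map.ofList
```

The composition shape once PR #310 lands would then be, roughly, `groupByCanonical project claims |> Map.filter (fun _ bucket -> antiConsensusGate bucket)`, with multi-root buckets passing and single-root buckets failing (the exact gate signature is PR #310's to define).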
Force-pushed from 6455f01 to 3b211da.
Force-pushed from 3b211da to e71ae08.
Force-pushed from e71ae08 to c30c3af.
…or / Integration Plan Otto-117 dedicated absorb of the most comprehensive synthesis ferry yet (Aaron Otto-116 "next amara update"). Covers 9 sections: 1. Repo contents (LFG + AceHack) 2. Learnings (retraction-native, operator-algebra, Arrow/Spine, agent-CI) 3. KSK background — detailed government context (Feb 27 2026 DoD supply-chain-risk under 10 U.S.C. § 3252 against Anthropic; Judge Rita Lin Mar 26 preliminary injunction; OpenAI Feb 28 parallel DoW contract with Fourth-Amendment-clause) 4. Network Integrity Detector (formalized "bullshit detector" — composite I(x) = σ(Σ w_i f_i) score) 5. Firefly + Cartel detection (PLV, cross-correlation, spectral, graph-community) 6. Network Differentiability (Shapley-ish counterfactual influence) 7. Oracle Rules enforcement mapping table 8. Integration Plan (proposes 4-sub-repo split) 9. 9 prioritized next tasks §33 archive-header compliance (Scope / Attribution / Operational status / Non-fusion disclaimer). Otto's notes section provides honest cross-reference to shipped work: ~40% of the ferry's operationalizable content is already shipped (PRs #295 RobustStats, #297 crossCorrelation, #298 PLV, #306 burstAlignment pending, #309 Veridicality.Provenance/Claim, #310 antiConsensusGate pending). Genuinely novel in 12th ferry (not in prior ferries): 1. Detailed government-context grounding for KSK (§3) 2. Composite integrity-score formulation I(x) = σ(Σ w_i f_i) 3. 4-sub-repo integration proposal (Conway's-Law-relevant per Otto-108 memory; Otto recommends staying single-repo) 4. Oracle-Rules enforcement decision table (§7) 5. Shapley-random-ordering counterfactual influence algorithm (§6) Specific-asks routed to Aaron: 1. §8 sub-repo split — Aaron decides per Otto-90 cross-repo 2. §9 task 1 KSK skeleton — Aaron + Max coordination 3. §3 citation verification — Aaron signals what matters Next graduation queue (priority-ordered from Otto's notes): 1. SemanticCanonicalization (matches 8th ferry rainbow-table; smallest next item) 2. 
scoreVeridicality composite (needs ADR on formula) 3. Spectral-coherence FFT detector (§5) 4. ModularitySpike (needs graph substrate) 5. EigenvectorCentralityDrift (needs linear algebra) 6. EconomicCovariance / Gini-on-weights (§5) 7. OracleRules spec doc (§7) 8. InfluenceSurface (§6; larger effort) 9. KSK skeleton (Aaron + Max coord) Sibling-ferry precedent: PRs #196/#211/#219/#221/#235/#245/ #259/#274/#293/#294/#296. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…l — 7th graduation

Ships the semantic-canonicalization primitive that turns "claims
about the same proposition" into a first-class operation. Seventh
graduation under the Otto-105 cadence; composes on PR #309's
Veridicality.Claim<'T>.

Aaron Otto-112 naming confirmed: Veridicality (not BullshitDetector);
"bullshit" stays informal, programmatic surface uses the formal term.

Attribution:
- Aaron = concept origin (bullshit-detector framing in bootstrap
  conversation; aboutness-vs-identity-of-source distinction)
- Amara = formalization (8th/10th ferries: K(c) = hash(subject,
  predicate, object, time-scope, modality, provenance-root,
  evidence-class))
- Otto = implementation with a deliberate design choice: EXCLUDE
  provenance-root from the key

Why exclude provenance-root: the key's purpose is to GROUP claims
about the same proposition across sources, so antiConsensusGate
(PR #310) can then check independent-root cardinality. Including
provenance-root in the key would defeat that grouping. If a future
use-case needs dedupe-by-identical-source, a separate 7-field
SourceScopedCanonicalClaimKey can be added.

Surface:
- Veridicality.CanonicalClaimKey record (5 fields: Subject,
  Predicate, Object, TimeScope, Modality)
- Veridicality.canonicalKey
  : ('T -> string*string*string*string*string)
    -> Claim<'T> -> CanonicalClaimKey
  User-supplied projector handles domain-specific normalization
  (lowercasing/trimming/unit-unification/alias-resolving). The
  module does not prescribe how to canonicalize natural language.
- Veridicality.groupByCanonical
  : projector -> Claim<'T> seq -> Map<CanonicalClaimKey, Claim<'T> list>
  Groups claims by proposition; preserves input order within each
  bucket.

Composition shape (once PR #310 lands): group first, then
antiConsensusGate per bucket. Multi-root buckets pass; single-root
buckets fail. The test "groupByCanonical produces distinct-root
counts per bucket" verifies the half that can be tested on main
right now.

Tests (7 new, 17 total in Veridicality module, all passing):
- canonicalKey projects payload fields
- canonicalKey EXCLUDES provenance-root (two same-prop-diff-root
  claims match)
- canonicalKey distinguishes different propositions
- groupByCanonical groups same-proposition under one key
- groupByCanonical preserves input order within bucket
- groupByCanonical on empty seq returns empty map
- groupByCanonical produces distinct-root counts per bucket

Build: 0 Warning / 0 Error.

SPOF (per Otto-106): pure function + record types; no external
deps; no SPOF introduced. The caller's projector is the
domain-specific SPOF — if it drifts, canonicalization drifts.
Documented as caller responsibility in XML-doc.

Next graduation queue:
- Graph substrate (prerequisite for cartel-detection graduations)
- largestEigenvalue / modularityScore / covarianceAcceleration
  (13th + 14th ferries)
- NetworkIntegrity / composite I(x) score (12th ferry §4)
- scoreVeridicality (composite; needs ADR on formula)

Composes with:
- src/Core/Veridicality.fs Provenance + Claim<'T> (PR #309 merged)
- src/Core/Veridicality.fs antiConsensusGate (PR #310 pending;
  downstream composition enabled by this ship)
- RobustStats.robustAggregate (PR #295) for weight-combining

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
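The surface described above can be sketched as follows. This is a hedged, self-contained approximation: `Provenance` and `Claim<'T>` are simplified stand-ins (the shipped records carry more fields), and the function bodies are illustrative rather than the shipped implementation.

```fsharp
// Simplified stand-ins for the shipped Veridicality types; the real
// Provenance record carries more fields than RootAuthority.
type Provenance = { RootAuthority: string }
type Claim<'T> = { Id: string; Payload: 'T; Weight: int64; Prov: Provenance }

/// Five-field proposition key — deliberately excludes provenance-root
/// so claims about the same proposition group across sources.
type CanonicalClaimKey =
    { Subject: string
      Predicate: string
      Object: string
      TimeScope: string
      Modality: string }

/// The caller-supplied projector maps a payload to the five key fields
/// (handling any lowercasing/trimming/alias resolution itself).
let canonicalKey (projector: 'T -> string * string * string * string * string)
                 (claim: Claim<'T>) : CanonicalClaimKey =
    let s, p, o, t, m = projector claim.Payload
    { Subject = s; Predicate = p; Object = o; TimeScope = t; Modality = m }

/// Groups claims by proposition key; List.groupBy preserves input
/// order within each bucket.
let groupByCanonical projector (claims: Claim<'T> seq) =
    claims
    |> Seq.toList
    |> List.groupBy (canonicalKey projector)
    |> Map.ofList
```

Under this sketch, two claims with the same payload but different `RootAuthority` values land in one bucket — exactly the grouping the anti-consensus gate needs downstream.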
…l — 7th graduation (#315)
…or / Integration Plan

Otto-117 dedicated absorb of the most comprehensive synthesis ferry
yet (Aaron Otto-116 "next amara update"). Covers 9 sections:
1. Repo contents (LFG + AceHack)
2. Learnings (retraction-native, operator-algebra, Arrow/Spine,
   agent-CI)
3. KSK background — detailed government context (Feb 27 2026 DoD
   supply-chain-risk under 10 U.S.C. § 3252 against Anthropic;
   Judge Rita Lin Mar 26 preliminary injunction; OpenAI Feb 28
   parallel DoW contract with Fourth-Amendment-clause)
4. Network Integrity Detector (formalized "bullshit detector" —
   composite I(x) = σ(Σ w_i f_i) score)
5. Firefly + Cartel detection (PLV, cross-correlation, spectral,
   graph-community)
6. Network Differentiability (Shapley-ish counterfactual influence)
7. Oracle Rules enforcement mapping table
8. Integration Plan (proposes 4-sub-repo split)
9. 9 prioritized next tasks

§33 archive-header compliance (Scope / Attribution / Operational
status / Non-fusion disclaimer).

Otto's notes section provides honest cross-reference to shipped
work: ~40% of the ferry's operationalizable content is already
shipped (PRs #295 RobustStats, #297 crossCorrelation, #298 PLV,
#306 burstAlignment pending, #309 Veridicality.Provenance/Claim,
#310 antiConsensusGate pending).

Genuinely novel in the 12th ferry (not in prior ferries):
1. Detailed government-context grounding for KSK (§3)
2. Composite integrity-score formulation I(x) = σ(Σ w_i f_i)
3. 4-sub-repo integration proposal (Conway's-Law-relevant per
   Otto-108 memory; Otto recommends staying single-repo)
4. Oracle-Rules enforcement decision table (§7)
5. Shapley-random-ordering counterfactual influence algorithm (§6)

Specific asks routed to Aaron:
1. §8 sub-repo split — Aaron decides per Otto-90 cross-repo
2. §9 task 1 KSK skeleton — Aaron + Max coordination
3. §3 citation verification — Aaron signals what matters

Next graduation queue (priority-ordered from Otto's notes):
1. SemanticCanonicalization (matches 8th ferry rainbow-table;
   smallest next item)
2. scoreVeridicality composite (needs ADR on formula)
3. Spectral-coherence FFT detector (§5)
4. ModularitySpike (needs graph substrate)
5. EigenvectorCentralityDrift (needs linear algebra)
6. EconomicCovariance / Gini-on-weights (§5)
7. OracleRules spec doc (§7)
8. InfluenceSurface (§6; larger effort)
9. KSK skeleton (Aaron + Max coord)

Sibling-ferry precedent: PRs #196/#211/#219/#221/#235/#245/
#259/#274/#293/#294/#296.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
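The composite score in §4, I(x) = σ(Σ w_i f_i), is a logistic squash of a weighted feature sum. A minimal sketch, with the caveat that function names and the exact factorisation here are illustrative assumptions — the real formula is pending the scoreVeridicality ADR:

```fsharp
// Illustrative sketch of I(x) = σ(Σ w_i f_i): weighted feature
// scores pushed through a logistic. Names and shape are assumptions;
// the actual formula awaits the scoreVeridicality ADR.
let sigma (z: float) = 1.0 / (1.0 + exp (-z))

let integrityScore (weights: float list) (features: float list) : float =
    List.map2 (*) weights features  // w_i * f_i, pairwise
    |> List.sum                     // Σ w_i f_i
    |> sigma                        // squash into (0, 1)
```

With all-zero evidence the score sits at the logistic midpoint 0.5, and it rises monotonically as weighted feature scores increase.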
…or / Integration Plan (Otto-117) (#311)

* ferry: Amara 12th absorb — Executive Summary / KSK / Integrity
  Detector / Integration Plan

* lint: fix markdownlint errors in 12th-ferry absorb (line-break
  heading + PR-number-at-line-start)

* fix(#311): [sic] annotation on .clave/ typo (verbatim-preserve,
  downstream uses .claude/)

  Ferry-absorbs preserve verbatim external-collaborator content;
  an editorial [sic] annotation is the scholarly convention for
  preserving the source while orienting the reader. The downstream
  operationalization PR will use `.claude/` (the actual repo path).

  Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
…D-9 operationalized)
Ships the independence-gate primitive from Amara's 10th ferry
verbatim, operationalizing the drift-taxonomy pattern-5 rule +
SD-9 soft default ("agreement is signal, not proof") at the
code layer.
Sixth graduation under the Otto-105 cadence; composes on
Veridicality.Claim + Veridicality.Provenance from PR #309
(5th graduation).
Built on top of the graduation-5 branch since #309 hasn't
merged to main yet; will consolidate on rebase.
Surface:
- Veridicality.antiConsensusGate
: Claim<'T> list -> Result<Claim<'T> list, string>
Returns Ok claims when the set of distinct Prov.RootAuthority
values across the input has cardinality >= 2; Error otherwise.
Operational intent: if 50 claims all assert the same fact but
trace back to one upstream source, the 50-way agreement is
still one piece of evidence. The gate rejects pseudo-consensus.
Input assumed to already be ABOUT the same assertion (callers
group-by canonical claim key before invoking). Canonicalization
is a separate future graduation (SemanticCanonicalization from
Amara's 8th-ferry rainbow-table framework).
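The contract above can be sketched as follows — a hedged approximation with simplified stand-in types (the shipped `Provenance` record carries more fields), not the shipped implementation:

```fsharp
// Simplified stand-ins; the shipped records carry more fields.
type Provenance = { RootAuthority: string }
type Claim<'T> = { Id: string; Payload: 'T; Weight: int64; Prov: Provenance }

/// Ok when the claims trace back to >= 2 distinct root authorities;
/// Error for empty, single-claim, or single-root inputs.
let antiConsensusGate (claims: Claim<'T> list) : Result<Claim<'T> list, string> =
    let distinctRoots =
        claims
        |> List.map (fun c -> c.Prov.RootAuthority)
        |> Set.ofList
    if Set.count distinctRoots >= 2 then Ok claims
    else Error "Agreement without independent roots"
```

Fifty claims from one root fail the gate; two claims from two distinct roots pass, with the input list returned unchanged inside `Ok`.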
Attribution:
- Aaron = concept origin (bullshit-detector framing in
bootstrap conversation; pattern-5 insight)
- Amara = formalization (3rd ferry drift-taxonomy pattern-5 +
10th ferry oracle-rule spec "Independence gate" row)
- Otto = implementation
Tests (6 new, 16 total in module, all passing):
- Empty list -> Error (vacuous agreement)
- Single-claim list -> Error
- 50 claims from one root -> Error (pseudo-consensus)
- 2 claims from 2 distinct roots -> Ok
- Many claims spanning multiple roots -> Ok
- Return type: Ok(claims) preserves input unchanged on pass
Build: 0 Warning / 0 Error.
SPOF (per Otto-106): pure function; no external deps. Callers
using antiConsensusGate as a hard trust gate should pair it
with RobustStats.robustAggregate when combining the resulting
weights (multi-root agreement still needs outlier-resistant
combination).
Next graduation queue (in priority order):
- SemanticCanonicalization (canonical claim keys for "same
assertion" detection — the precondition for antiConsensusGate
to group correctly)
- scoreVeridicality (Amara's V(c) / BS(c) composite; needs ADR
on 5-feature vs 7-feature factorisation)
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…otAuthority

Addresses two PR-thread findings on Veridicality.fs:antiConsensusGate.

Thread 1 (P1, line 158) — doc-hygiene: the `///` block introduced
direct contributor/agent names and history-log content (ferry
references, graduation cadence, "Attribution:" + "Provenance:"
stanzas). Per the repo's standing doc-comment rule, code comments
explain the code, not history. The rewritten block keeps only what
the function does (inputs, returns, invariants, edge cases,
composition notes) and drops the narrative.

Thread 2 (P0, line 166) — real behaviour bug: the original
implementation mapped every `RootAuthority` string to the
distinct-root set, which meant an invalid/missing value (e.g., `""`
or whitespace) would count as a distinct root and let a
single-source cluster accidentally pass the anti-consensus gate.
Fix: filter out `String.IsNullOrWhiteSpace` root authorities before
counting. The tolerant-filter contract (degenerate input is skipped,
not thrown) matches the rest of the module's semantics; callers
that want strict validation should run validateProvenance first.

Added three tests:
- empty RootAuthority does NOT count as a distinct root
- whitespace RootAuthority does NOT count as a distinct root
- valid distinct roots still pass even when mixed with empty ones

Tests: 26 passing (17 pre-existing + 6 original anti-consensus +
3 new empty-root coverage). Build: 0 Warning / 0 Error.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
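The Thread-2 fix amounts to one extra filter in the root-counting pipeline. A sketch under the same simplified stand-in types used throughout (not the shipped code verbatim):

```fsharp
open System

// Simplified stand-ins; the shipped records carry more fields.
type Provenance = { RootAuthority: string }
type Claim<'T> = { Id: string; Payload: 'T; Weight: int64; Prov: Provenance }

let antiConsensusGate (claims: Claim<'T> list) : Result<Claim<'T> list, string> =
    let distinctRoots =
        claims
        |> List.map (fun c -> c.Prov.RootAuthority)
        // Degenerate roots ("" / whitespace) are skipped, not thrown:
        // they must not count as evidence of independence.
        |> List.filter (fun r -> not (String.IsNullOrWhiteSpace r))
        |> Set.ofList
    if Set.count distinctRoots >= 2 then Ok claims
    else Error "Agreement without independent roots"
```

With the filter in place, an empty-root claim plus one real root still fails the gate, while two real roots pass even when mixed with empty-root claims.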
```fsharp
// ─── antiConsensusGate ─────────

let private claimWithRoot (id: string) (root: string) : Veridicality.Claim<int> =
    { Id = id
      Payload = 0
      Weight = 1L
      Prov = { goodProv () with RootAuthority = root } }

[<Fact>]
```
P1: PR description says “6 new tests (16 total in module)”, but this diff adds 9 new [<Fact>] tests in this file. Please update the PR description/test counts to match what actually landed so reviewers aren’t misled.
```fsharp
    [ claimWithRoot "c1" "root-a"
      claimWithRoot "c2" "root-b" ]
match Veridicality.antiConsensusGate claims with
| Ok returned -> returned |> List.length |> should equal 2
```
P2: This test name/comment says the returned list is “unchanged”, but the assertion only checks List.length = 2, which would still pass even if the gate returned a different (but same-length) list. Either assert structural equality with the original claims (as the earlier pass test does) or drop this redundant test to avoid misleading intent.
```diff
- | Ok returned -> returned |> List.length |> should equal 2
+ | Ok returned -> returned |> should equal claims
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 4dd30182bc
```fsharp
claims
|> List.map (fun c -> c.Prov.RootAuthority)
```
Exclude retractions from independent-root counting
antiConsensusGate counts RootAuthority from every claim, but Claim.Weight explicitly distinguishes assertions (> 0) from retractions (< 0). As written, one positive claim from root-a plus one retracting claim from root-b will pass the gate, even though only one root is actually supporting the assertion. This can let raw claim ledgers (that include retractions) incorrectly upgrade trust; the root count should be computed from supporting/net-positive evidence only.
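One possible shape for this suggestion — counting independence only over supporting (positive-weight) claims — is sketched below. This is a reviewer suggestion, not shipped code; `supportingRoots` is a hypothetical helper name, and the types are simplified stand-ins:

```fsharp
// Simplified stand-ins; the shipped records carry more fields.
type Provenance = { RootAuthority: string }
type Claim<'T> = { Id: string; Payload: 'T; Weight: int64; Prov: Provenance }

/// Hypothetical: distinct roots computed from supporting evidence
/// only, so a retraction (Weight < 0) from a second root cannot make
/// a single-root assertion look independently corroborated.
let supportingRoots (claims: Claim<'T> list) : Set<string> =
    claims
    |> List.filter (fun c -> c.Weight > 0L)   // keep assertions only
    |> List.map (fun c -> c.Prov.RootAuthority)
    |> Set.ofList
```

Under this sketch, one positive claim from root-a plus one retracting claim from root-b yields a single supporting root, so the gate would correctly fail.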
```fsharp
|> List.filter (fun r -> not (String.IsNullOrWhiteSpace r))
|> Set.ofList
```
Trim root identifiers before deduplicating roots
The gate drops whitespace-only roots but does not normalize non-empty values before Set.ofList, so padded variants like "root-a" and " root-a " are counted as two independent authorities. In that input shape, pseudo-consensus can pass with only one real root plus formatting noise/adversarial padding. Trimming before distinct-counting would align the implementation with the degenerate-root filtering intent.
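A sketch of the suggested normalization — trim before deduplicating so padded variants collapse to one root. Again a reviewer suggestion with a hypothetical helper name, not shipped code:

```fsharp
// Trim root identifiers before distinct-counting so " root-a " and
// "root-a" count as one authority; drop roots that trim to empty.
let normalizedDistinctRoots (roots: string list) : Set<string> =
    roots
    |> List.map (fun (r: string) -> r.Trim())
    |> List.filter (fun r -> r <> "")
    |> Set.ofList
```

This aligns distinct-counting with the degenerate-root filtering intent: formatting noise or adversarial padding can no longer manufacture a second "independent" authority.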
Summary
Sixth Amara-graduation — ships the independence-gate primitive from Amara's 10th ferry verbatim. Operationalises the drift-taxonomy pattern-5 rule + SD-9 soft default ("agreement is signal, not proof") as callable code.
Depends on PR #309 (`Veridicality.Provenance` + `Claim<'T>` types).
Built on top of that branch; consolidates on rebase when #309 lands.

What lands

`Veridicality.antiConsensusGate : Claim<'T> list -> Result<Claim<'T> list, string>`
- `Ok claims` when distinct `Prov.RootAuthority` count >= 2
- `Error "Agreement without independent roots"` otherwise

Operational intent
If 50 claims agree but trace to one upstream source, the 50-way agreement is still one piece of evidence. Gate rejects pseudo-consensus; genuine multi-root agreement passes.
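Since callers are expected to group claims by proposition key before invoking the gate, the composition can be sketched as below. `gatePerBucket` is a hypothetical helper for illustration, with simplified stand-in types:

```fsharp
// Simplified stand-ins; the shipped records carry more fields.
type Provenance = { RootAuthority: string }
type Claim<'T> = { Id: string; Payload: 'T; Weight: int64; Prov: Provenance }

let antiConsensusGate (claims: Claim<'T> list) =
    let roots = claims |> List.map (fun c -> c.Prov.RootAuthority) |> Set.ofList
    if Set.count roots >= 2 then Ok claims
    else Error "Agreement without independent roots"

// Hypothetical composition helper: group claims by a caller-supplied
// proposition key, then gate each bucket independently. Multi-root
// buckets pass; single-root buckets fail.
let gatePerBucket (keyOf: Claim<'T> -> 'K) (claims: Claim<'T> list) =
    claims
    |> List.groupBy keyOf
    |> List.map (fun (key, bucket) -> key, antiConsensusGate bucket)
```

A bucket holding claims from root-a and root-b passes; a bucket whose claims all trace to root-a fails, regardless of how many claims it holds.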
Input assumed ABOUT the same assertion (group by canonical claim key
before invoking). Canonicalization is a separate future graduation
(`SemanticCanonicalization`).

Attribution (three-layer per Otto-112 memory)
Test plan
🤖 Generated with Claude Code